Qwen3 30B A1.5B 64K High Speed NEO Imatrix MAX Gguf
A speed-optimized variant of the Qwen3-30B-A3B Mixture-of-Experts model. It improves generation speed by reducing the number of active experts, supports a 64K context length, and is suited to a wide range of text-generation tasks.
Large language model with multilingual support.
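Since the model is distributed as a GGUF file, a common way to run it locally is via llama-cpp-python. The sketch below is a minimal, assumed usage example: the filename and quantization level are placeholders rather than the exact files in this repository, and the context/GPU settings should be adjusted to your hardware.

```python
# Minimal sketch of loading the GGUF with llama-cpp-python.
# The model_path is a hypothetical filename; point it at the file you downloaded.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3-30B-A1.5B-64K-High-Speed-NEO-MAX-Q4_K_M.gguf",  # placeholder filename/quant
    n_ctx=65536,       # request the full 64K context window
    n_gpu_layers=-1,   # offload all layers to GPU if available; set 0 for CPU-only
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Summarize the benefits of Mixture-of-Experts models."}],
    max_tokens=256,
)
print(out["choices"][0]["message"]["content"])
```

With fewer active experts per token, prompt processing and token generation should run noticeably faster than the base A3B model at the same quantization, at some cost in output quality.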